
    Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

    Bilateral filters are in widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we generalize the parametrization and, in particular, derive a gradient descent algorithm so that the filter parameters can be learned from data. This derivation makes it possible to learn high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate their use in applications where a single filter application is desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for use on high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the use of general forms of filters.
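    The central idea above, treating the filter itself as a set of trainable parameters, can be illustrated with a toy example. The sketch below is not the paper's permutohedral-lattice implementation; it is a minimal brute-force bilateral-style filter on a small grayscale image in which a free-form spatial kernel and a range bandwidth are fitted by gradient descent on a denoising target. The synthetic image, kernel size, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch: a bilateral-style filter with learnable parameters, trained by
# gradient descent (not the paper's permutohedral-lattice implementation).
import torch

torch.manual_seed(0)
H = W = 32
K = 5                                              # K x K spatial neighbourhood
clean = torch.rand(H, W)                           # toy reference image
noisy = clean + 0.1 * torch.randn(H, W)            # noisy observation

spatial = torch.zeros(K, K, requires_grad=True)    # learnable spatial log-weights
log_sigma_r = torch.zeros((), requires_grad=True)  # learnable range bandwidth

def bilateral(img):
    """Brute-force bilateral-style filtering with the learnable parameters."""
    pad = K // 2
    padded = torch.nn.functional.pad(img[None, None], (pad, pad, pad, pad),
                                     mode="replicate")[0, 0]
    num = torch.zeros_like(img)
    den = torch.zeros_like(img)
    for dy in range(K):
        for dx in range(K):
            shifted = padded[dy:dy + H, dx:dx + W]
            range_w = torch.exp(-(shifted - img) ** 2
                                / (2 * torch.exp(log_sigma_r) ** 2))
            w = torch.exp(spatial[dy, dx]) * range_w   # positive combined weight
            num = num + w * shifted
            den = den + w
    return num / (den + 1e-8)

opt = torch.optim.Adam([spatial, log_sigma_r], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.mean((bilateral(noisy) - clean) ** 2)
    loss.backward()
    opt.step()
print("denoising MSE after training:", loss.item())
```

    A Gaussian bilateral filter is recovered as the special case where the spatial weights themselves form a Gaussian; letting them be free parameters is what gives the more general learned filter.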

    Permutohedral Lattice CNNs

    This paper presents a convolutional layer that is able to process sparse input features. For image recognition problems, for example, this allows efficient filtering of signals that do not lie on a dense grid (like pixel positions) but are described by more general features (such as color values). The presented algorithm makes use of the permutohedral lattice data structure. The permutohedral lattice was introduced to efficiently implement the bilateral filter, a commonly used image processing operation. Its use allows for a generalization of the convolution type found in current (spatial) convolutional network architectures.
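    The splat-blur-slice pattern that the permutohedral lattice makes efficient can be sketched on an ordinary integer grid. The toy below is only an illustration of that general pattern, not the lattice itself (the real data structure tessellates a (d+1)-dimensional space with simplices and uses barycentric splatting); the grid resolution, blur kernel, and synthetic data are assumptions.

```python
# Toy splat-blur-slice filtering of values attached to scattered 2D feature points,
# using nearest-cell binning on an integer grid instead of the permutohedral lattice.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(200, 2))       # sparse positions in feature space
values = np.sin(points[:, 0]) + 0.1 * rng.standard_normal(200)

cell = np.floor(points).astype(int)              # splat: accumulate onto grid cells
grid_val = np.zeros((10, 10))
grid_wgt = np.zeros((10, 10))
np.add.at(grid_val, (cell[:, 0], cell[:, 1]), values)
np.add.at(grid_wgt, (cell[:, 0], cell[:, 1]), 1.0)

kernel = np.array([1.0, 2.0, 1.0]) / 4.0         # blur: separable [1 2 1] pass
for axis in (0, 1):
    grid_val = np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="same"), axis, grid_val)
    grid_wgt = np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="same"), axis, grid_wgt)

filtered = (grid_val[cell[:, 0], cell[:, 1]]     # slice: read back at the points
            / np.maximum(grid_wgt[cell[:, 0], cell[:, 1]], 1e-8))
print(filtered[:5])
```

    In a learnable layer of this kind, the fixed blur kernel would be replaced by trainable weights, which is the generalization of convolution the paper refers to.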

    Filopodyan: An open-source pipeline for the analysis of filopodia

    Filopodia have important sensory and mechanical roles in motile cells. The recruitment of actin regulators, such as ENA/VASP proteins, to sites of protrusion underlies diverse molecular mechanisms of filopodia formation and extension. We developed Filopodyan (filopodia dynamics analysis) in Fiji and R to measure fluorescence in filopodia and at their tips and bases concurrently with their morphological and dynamic properties. Filopodyan supports high-throughput phenotype characterization as well as detailed interactive editing of filopodia reconstructions through an intuitive graphical user interface. Our highly customizable pipeline is widely applicable, capable of detecting filopodia in four different cell types in vitro and in vivo. We use Filopodyan to quantify the recruitment of ENA and VASP preceding filopodia formation in neuronal growth cones, and uncover a molecular heterogeneity whereby different filopodia display markedly different responses to changes in the accumulation of ENA and VASP fluorescence in their tips over time.
    Funding: J.L. Gallop and V. Urbančič are supported by the Wellcome Trust (WT095829AIA). J. Mason and B. Richier are supported by the European Research Council (281971). C.E. Holt is supported by the Wellcome Trust (program grant 085314) and the European Research Council (advanced grant 322817). The Gurdon Institute is funded by the Wellcome Trust (203144) and Cancer Research UK (C6946/A24843).
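    As a rough illustration of the kind of per-tip measurement described above (explicitly not Filopodyan's Fiji/R code), the Python sketch below takes per-frame tip coordinates from some upstream segmentation step and records the mean fluorescence in a small window around each tip, yielding an intensity trace over time. The array shapes, window radius, and synthetic data are assumptions.

```python
# Toy measurement of tip fluorescence over time, given tip coordinates per frame.
import numpy as np

def tip_fluorescence(movie, tip_xy, radius=3):
    """movie: (T, H, W) intensity stack; tip_xy: (T, 2) tip pixel coordinates (y, x)."""
    T, H, W = movie.shape
    traces = np.zeros(T)
    for t, (y, x) in enumerate(tip_xy.astype(int)):
        y0, y1 = max(y - radius, 0), min(y + radius + 1, H)
        x0, x1 = max(x - radius, 0), min(x + radius + 1, W)
        traces[t] = movie[t, y0:y1, x0:x1].mean()  # mean intensity around the tip
    return traces

# Usage with synthetic data: a 50-frame movie and a slowly drifting tip.
rng = np.random.default_rng(1)
movie = rng.random((50, 64, 64))
tip_xy = np.column_stack([np.linspace(10, 40, 50), np.linspace(12, 45, 50)])
print(tip_fluorescence(movie, tip_xy)[:5])
```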

    Magnetic resonance imaging, computed tomography, and 68Ga-DOTATOC positron emission tomography for imaging skull base meningiomas with infracranial extension treated with stereotactic radiotherapy - a case series

    Introduction: Magnetic resonance imaging (MRI) and computed tomography (CT) were compared retrospectively with ⁶⁸Ga-DOTATOC positron emission tomography (⁶⁸Ga-DOTATOC-PET) for their ability to delineate infracranial extension of skull base (SB) meningiomas treated with fractionated stereotactic radiotherapy.
    Methods: Fifty patients with 56 meningiomas of the SB underwent MRI, CT, and ⁶⁸Ga-DOTATOC PET/CT prior to fractionated stereotactic radiotherapy. The study group consisted of 16 patients who had infracranial meningioma extension visible on MRI ± CT (MRI/CT) or PET, and these were evaluated further. The respective findings were reviewed independently, analyzed with respect to correlations, and compared with each other.
    Results: Within the study group, SB transgression was associated with bony changes visible on CT in 14 patients (81%). Tumorous changes of the foramen ovale and foramen rotundum were evident in 13 and 8 cases, respectively, accompanied by skeletal muscle invasion in 8 lesions. We analysed six designated anatomical sites of the SB in each of the 16 patients. Of these 96 sites, 42 showed infiltration, delineable by both MRI/CT and PET in 35 cases and by PET only in 7 cases. The mean infracranial volume delineable on PET was 10.1 ± 10.6 cm³, somewhat larger than the volume detectable on MRI/CT (8.4 ± 7.9 cm³).
    Conclusions: ⁶⁸Ga-DOTATOC-PET allows detection and assessment of the extent of infracranial meningioma invasion. This method seems useful for planning fractionated stereotactic radiation when used in addition to conventional imaging modalities, which are often inconclusive in the SB region.

    Design of a dual species atom interferometer for space

    Atom interferometers have a multitude of proposed applications in space, including precise measurements of the Earth's gravitational field, navigation and ranging, and fundamental physics such as tests of the weak equivalence principle (WEP) and gravitational wave detection. While atom interferometers are realized routinely in ground-based laboratories, current efforts aim at the development of a space-compatible design optimized with respect to dimensions, weight, power consumption, mechanical robustness, and radiation hardness. In this paper, we present a design of a high-sensitivity differential dual-species ⁸⁵Rb/⁸⁷Rb atom interferometer for space, including the physics package, laser system, electronics, and software. The physics package comprises the atom source, consisting of dispensers and a 2D magneto-optical trap (MOT); the science chamber with a 3D MOT, a magnetic trap based on an atom chip, and an optical dipole trap (ODT) used for Bose-Einstein condensate (BEC) creation and interferometry; the detection unit; the vacuum system for generating 10⁻¹¹ mbar ultra-high vacuum; and the high-suppression-factor magnetic shielding as well as the thermal control system. The laser system is based on a hybrid approach using fiber-based telecom components and high-power laser diode technology, and it includes all laser sources for the 2D MOT, 3D MOT, ODT, interferometry, and detection. Manipulation and switching of the laser beams is carried out on an optical bench using Zerodur bonding technology. The instrument consists of 9 units with an overall mass of 221 kg, an average power consumption of 608 W (819 W peak), and a volume of 470 liters, which would fit well on a satellite to be launched with a Soyuz rocket, as system studies have shown.
    Comment: 30 pages, 23 figures; accepted for publication in Experimental Astronomy.

    Current concepts in clinical radiation oncology


    Learning an event sequence embedding for event-based deep stereo

    Today, frame-based cameras are the sensors of choice for machine vision applications. However, these cameras, originally developed for the acquisition of static images rather than for sensing dynamic, uncontrolled visual environments, suffer from high power consumption, high data rate, high latency, and low dynamic range. An event-based image sensor addresses these drawbacks by mimicking a biological retina: instead of measuring the intensity of every pixel in a fixed time interval, it reports events of significant pixel intensity change. Every such event is represented by its position, sign of change, and timestamp, accurate to the microsecond. Asynchronous event sequences require special handling, since traditional algorithms work only with synchronous, spatially gridded data. To address this problem, we introduce a new module for event sequence embedding, for use in different applications. The module builds a representation of an event sequence by first aggregating information locally across time, using a novel fully-connected layer for an irregularly sampled continuous domain, and then across the discrete spatial domain. Based on this module, we design a deep learning-based stereo method for event-based cameras. The proposed method is the first learning-based stereo method for an event-based camera and the only method that produces dense results. We show large performance increases on the Multi Vehicle Stereo Event Camera Dataset (MVSEC), which has become the standard benchmark for event-based stereo methods.
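    The temporal-then-spatial aggregation described above can be sketched as follows. This is not the authors' module, only a minimal illustration: each event's continuous time offset and polarity are passed through a small MLP, the per-event features are summed into their pixel locations, and a spatial convolution then aggregates across the grid. The layer sizes, event layout, and synthetic event stream are assumptions.

```python
# Toy event-sequence embedding: per-event MLP over (time offset, polarity),
# summation per pixel, then a spatial convolution over the resulting grid.
import torch

torch.manual_seed(0)
H = W = 32
events = torch.rand(500, 4)                           # columns: x, y, polarity, t
events[:, 0] *= W
events[:, 1] *= H
events[:, 2] = (events[:, 2] > 0.5).float() * 2 - 1   # polarity in {-1, +1}

temporal_mlp = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(),
                                   torch.nn.Linear(16, 8))
spatial_conv = torch.nn.Conv2d(8, 8, kernel_size=3, padding=1)

def embed(ev):
    t_ref = ev[:, 3].max()                            # offsets relative to newest event
    feats = temporal_mlp(torch.stack([t_ref - ev[:, 3], ev[:, 2]], dim=1))  # (N, 8)
    x = ev[:, 0].long().clamp(0, W - 1)
    y = ev[:, 1].long().clamp(0, H - 1)
    grid = torch.zeros(8, H * W)
    grid.index_add_(1, y * W + x, feats.t())          # sum event features per pixel
    return spatial_conv(grid.view(1, 8, H, W))        # aggregate across space

print(embed(events).shape)                            # torch.Size([1, 8, 32, 32])
```

    A stereo network along these lines would compute such embeddings for both cameras and feed them into a standard matching architecture; the embedding is the part that handles the irregular, continuous-time input.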

    Recovering Intrinsic Images with a Global Sparsity Prior On Reflectance

    We address the challenging task of decoupling material properties from lighting properties given a single image. In the last two decades, virtually all works have concentrated on exploiting edge information to address this problem. We take a different route by introducing a new prior on reflectance that models reflectance values as being drawn from a sparse set of basis colors. This results in a Random Field model with global latent variables (the basis colors) and pixel-accurate output reflectance values. We show that high-quality results can be achieved without edge information, on par with methods that exploit this source of information. Finally, we are able to improve on state-of-the-art results by integrating edge information into our model. We believe that our new approach is an excellent starting point for future developments in this field.
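    The sparse basis-color idea can be illustrated with a toy decomposition (this is not the paper's Random Field inference): cluster per-pixel chromaticities into a handful of basis colors, snap reflectance to the assigned color, and explain the residual as shading. The synthetic image, the number of basis colors, and the plain k-means step are illustrative assumptions.

```python
# Toy intrinsic decomposition with a sparse set of basis colors for reflectance.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
palette = np.array([[0.8, 0.2, 0.2], [0.2, 0.6, 0.9]])      # two true reflectances
true_refl = palette[rng.integers(0, 2, (H, W))]
shading = np.linspace(0.3, 1.0, W)[None, :, None]            # smooth lighting ramp
image = true_refl * shading

K = 4                                                        # allowed basis colors
chroma = image / (image.sum(axis=2, keepdims=True) + 1e-8)   # shading-invariant cue
pixels = chroma.reshape(-1, 3)
centers = pixels[rng.choice(len(pixels), K, replace=False)]
for _ in range(10):                                          # plain k-means
    assign = np.argmin(((pixels[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for k in range(K):
        if np.any(assign == k):
            centers[k] = pixels[assign == k].mean(axis=0)

reflectance = centers[assign].reshape(H, W, 3)               # snapped to basis colors
est_shading = (image / (reflectance + 1e-8)).mean(axis=2)    # shading estimate (up to a per-color scale)
print("basis colors actually used:", len(np.unique(assign)))
```

    The paper's model goes further by treating the basis colors as global latent variables in a Random Field and inferring the pixel assignments jointly with them, rather than clustering chromaticities in a separate step.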